Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network-based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select and aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained on both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize to a new, unseen domain and to learn from multi-domain datasets.
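To make the described architecture concrete, the following is a minimal sketch, not the paper's exact model, of an attention-based encoder-decoder generator in PyTorch. The class names (`Encoder`, `AttnDecoder`), the GRU encoder, the additive-style attention scoring, and all dimensions are illustrative assumptions; the paper's own decoder design may differ.

```python
# Hedged sketch of an encoder-decoder NLG generator with attention.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids for the input semantic elements
        emb = self.embed(src)
        outputs, hidden = self.rnn(emb)  # outputs: (batch, src_len, hid_dim)
        return outputs, hidden

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The LSTM cell consumes the previous word plus the attention context,
        # letting the decoder select and aggregate input semantic elements.
        self.cell = nn.LSTMCell(emb_dim + hid_dim, hid_dim)
        self.attn = nn.Linear(hid_dim * 2, 1)  # additive-style scoring
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, tok, h, c, enc_outputs):
        # Score each encoder position against the current decoder state,
        # then aggregate encoder outputs into a single context vector.
        src_len = enc_outputs.size(1)
        h_rep = h.unsqueeze(1).expand(-1, src_len, -1)
        scores = self.attn(torch.cat([h_rep, enc_outputs], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)  # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        h, c = self.cell(torch.cat([self.embed(tok), context], dim=-1), (h, c))
        return self.out(h), h, c  # logits over the output vocabulary

# Toy usage: encode a dialogue-act representation, decode one step.
enc = Encoder(vocab_size=100)
dec = AttnDecoder(vocab_size=100)
src = torch.randint(0, 100, (2, 5))  # batch of 2, 5 input elements
enc_out, enc_h = enc(src)
h = enc_h.squeeze(0)
c = torch.zeros_like(h)
logits, h, c = dec.step(torch.zeros(2, dtype=torch.long), h, c, enc_out)
```

Because the encoder, attention, and decoder are composed into one differentiable graph, sentence planning (which semantic elements to realize, via the attention weights) and surface realization (the word-by-word output) can be trained jointly with a single cross-entropy loss, as the abstract describes.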